Explicit Learning in ACT-R

Author

  • Niels Taatgen
Abstract

A popular distinction in the learning literature is the distinction between implicit and explicit learning. Although many studies elaborate on the nature of implicit learning, little attention is paid to explicit learning. The unintentional aspect of implicit learning corresponds well to the mechanistic view of learning employed in architectures of cognition. But how do we account for deliberate, intentional, explicit learning? This chapter argues that explicit learning can be explained by strategies that exploit implicit learning mechanisms. This idea is explored and modelled using the ACT-R theory (Anderson, 1993). An explicit strategy for learning facts in ACT-R's declarative memory is rehearsal, a strategy that uses ACT-R's activation learning mechanisms to gain deliberate control over what is learned. In the same sense, strategies for explicit procedural learning are proposed. Procedural learning in ACT-R involves generalisation of examples; explicit learning rules can create and manipulate these examples. An example of these explicit rules will be discussed. These rules are general enough to model the learning of three different tasks. Furthermore, the last of these models can explain the difference between adults and children in the discrimination-shift task.

Introduction

One of the basic assumptions all architectures of cognition share is that all learning can be described with a fixed set of mechanisms. The term 'mechanism' refers to the fact that learning is unintentional and always at work. The term 'fixed' refers to the fact that learning itself never changes, and is the same for each person, regardless of age or intelligence. This view seems to be at odds with the general view of learning in psychology. The hallmark of learning is adaptation, the capacity of the organism to change its behaviour to suit a particular environment. This does not necessarily imply that learning itself is susceptible to adaptation. But work from developmental psychology clearly suggests that learning changes with age. One classical experiment showing that the way children learn differs from adults is discrimination-shift learning (Kendler & Kendler, 1959). We will discuss this experiment in more detail later on.

A second counter-intuitive aspect of learning mechanisms is the fact that learning is unintentional. Although we do not have complete control over learning, the idea that we have no control at all seems too strong. The distinction between implicit and explicit learning centres on this issue. Implicit learning is unconscious and unintentional, and is therefore consistent with the mechanistic and fixed view of learning in architectures of cognition. In explicit learning, on the other hand, goals and intentions determine what is learned. Moreover, many studies suggest that explicit learning is much more powerful than implicit learning (see Shanks & St. John, 1994, for an overview). Some things cannot be learned implicitly at all, but can be learned explicitly. Finally, things that can be learned implicitly can often be learned faster explicitly.

Another aspect of learning that seems to be at odds with mechanistic learning is learning through insight, as discussed in section 4 of this book. Many learning mechanisms model gradual learning: learning in the PDP neural network tradition is gradual (Rumelhart & McClelland, 1986), and chunking in Soar is inspired by, and can explain, the power law of practice (Newell & Rosenbloom, 1981).
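For reference, the power law of practice mentioned here states that the time needed to perform a task decreases as a power function of the amount of practice. A rough sketch of its usual form (the symbols are chosen here for illustration only):

\[ T(N) = A + B\,N^{-\alpha} \]

where T(N) is the time needed on the N-th practice trial, A is the asymptotic time, B the difference between initial and asymptotic performance, and \alpha > 0 the learning rate.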
A property of insight learning is a sudden qualitative shift. Take for example match-stick algebra (MSA), as discussed by Knoblich and Ohlsson (1997). In MSA, the insights subjects gain concern a number of constraints that are part of normal algebra; the subjects have to discover that these constraints may be violated in MSA. Knoblich and Ohlsson conclude that once a subject has relaxed a certain constraint, it stays relaxed. In other words, the subject has learned something about that constraint in the context of MSA. This type of learning is not gradual, but stepwise.

So the central question of this chapter is how learning mechanisms in architectures of cognition can be made consistent with an adaptive view of learning that allows for flashes of insight. Or, stated another way, we need a theory of explicit learning. We will concentrate the discussion on the ACT-R architecture (Anderson, 1993), but some aspects may apply to other architectures as well. Since mechanisms in architectures are fixed, we have to seek other ways to explain adaptation in learning. The only thing that changes over time in an architecture is the knowledge in its memory, so an explanation of changes in learning has to be an explanation in terms of changes in the contents of memory. As a consequence, the learning capabilities of an individual can be divided into two classes: the implicit learning mechanisms of the architecture, and explicit learning strategies stored in its memory.

What is the nature of these explicit learning strategies? There are two possibilities. The first possibility is that an explicit strategy can directly affect memory. In that case, we might have a rule in procedural memory that can directly change other rules by adding conditions, changing weights, and so on. This, however, does not seem to be a good option. One of the essential properties of procedural memory is that its contents cannot be accessed directly, and intentionally changing a rule would require such direct access. Moreover, it would violate one of the basic assumptions of an architecture of cognition: that the learning mechanisms in the architecture are sufficient. So we have to focus on a second, more likely possibility: that explicit learning is built on top of implicit learning, in the sense that it learns by using the implicit learning mechanisms.

A relatively simple example may explain this point. It is a well-known fact that people are not very good at remembering facts that cannot easily be related to other knowledge they have. The whole tradition of theories about short-term memory that started with Miller's magical number seven is based on this fact (Miller, 1956). Atkinson and Shiffrin (1968) introduced the mechanism of rehearsal to account for the fact that some facts in short-term memory get stored in long-term memory, and others do not. Closer scrutiny shows, however, that rehearsal is not really a mechanism in the sense discussed earlier. Rehearsal is not always at work; most of the time it is not. People only rehearse if they have consciously decided to do so. So rehearsal is not mechanistic: it is tied to intentions. Neither is rehearsal fixed. It turns out that small children do not use rehearsal at all, so it must either be a dormant strategy that surfaces at some point, or a strategy that children acquire at some point during development. Rehearsal is therefore a typical example of learning that is not part of the architecture, but rather a strategy represented in memory.
We will return to this issue after a brief introduction to learning in the ACT-R architecture.

Learning in ACT-R

The ACT-R architecture has two memory systems, a declarative memory and a procedural memory. Associated with each of these memory systems is a number of learning mechanisms that add and maintain the knowledge in them. ACT-R is based on the theory of rational analysis (Anderson, 1990), and the learning mechanisms of ACT-R are no exception. According to rational analysis, the cognitive system is optimised to its environment, so a careful examination of the environment can shed as much light on the cognitive system as studying the cognitive system itself.

Implicit learning in declarative memory

A first example of the role of rational analysis in ACT-R is declarative memory. Elements in declarative memory, called chunks, closely resemble nodes in a semantic network. Each chunk has an activation value, which represents the odds that the chunk will be needed in the current context. To estimate this value, the learning mechanisms have to keep close track of the environment. The activation value of a chunk has two parts: the base-level activation, and activation received through associations with other chunks. The latter is context-dependent: once a certain chunk is part of ACT-R's current context, all chunks with an association to this chunk temporarily gain activation, since the presence of an associated chunk increases their chance of being needed. The base-level activation of a chunk is based on its past use. Two factors play a role: how many times the chunk was needed in the past, and how long ago this was. The learning rule used in ACT-R is derived from Bayesian statistics; Anderson shows that this rule reflects both regularities in the environment and empirical data from memory experiments. Association strengths are learned using similar rules. ACT-R uses the activation values of chunks to order the chunks in the matching process, because the activation value determines the time it takes to retrieve a chunk.

Implicit learning in procedural memory

As the name implies, knowledge in production memory is represented by production rules. Associated with each rule are a number of parameters. The strength parameter reflects past use of a rule, and is governed by the same laws as the base-level activation of chunks. The a, b, q and r parameters of a rule reflect its past cost-benefit characteristics: the a and b parameters represent the current and future cost of the rule, and the q and r parameters the chances of the rule succeeding and of reaching the goal. Bayesian statistics are again used to estimate these parameters. The cost-benefit parameters are used in conflict resolution. For each rule that is allowed to match, an expected outcome is calculated using the equation

expected outcome = PG − C

where P is the estimated chance of success, calculated from q and r, G is the estimated value of the goal, and C is the estimated cost of reaching the goal, calculated from a and b.
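For concreteness, the quantities just described take roughly the following form in the standard ACT-R formulation (a sketch only; the exact equations and parameter conventions vary somewhat between versions of the theory):

\[ B_i = \ln \sum_{j=1}^{n} t_j^{-d} \qquad\qquad A_i = B_i + \sum_j W_j S_{ji} \]

Here B_i is the base-level activation of chunk i, t_j is the time elapsed since the j-th use of the chunk, d is a decay parameter, and the second sum adds the associative activation the chunk receives from the chunks j in the current context (W_j are source activations and S_ji association strengths). For conflict resolution, P and C in the expected outcome PG − C are commonly composed as

\[ P = q\,r \qquad\qquad C = a + b \]

so a rule is preferred when its estimated probability of success, weighted by the value of the goal, outweighs its estimated cost.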
New rules are learned by the analogy mechanism. This involves generalisation of examples in declarative memory whenever a goal turns up that resembles the example. The examples are stored in specialised chunks, called dependency chunks, that contain all the information needed: an example goal, an example solution, chunks (called constraints) that must be retrieved from declarative memory in order to create the solution, and sometimes additional subgoals that must be satisfied before the main goal can be reached.

Figure 1 shows an example of deriving a rule for doing a simple addition. The chunk dependency2+3 points to example-goal1, in which the addition 2+3 still has to be calculated. In example-solution1 the answer is supplied in the answer slot. The additional fact needed to derive the rule is the fact that 2+3 equals 5. Whenever a new addition problem turns up, the rule addition-problem-production1 will be derived and added to production memory. Note that all identifiers starting with an =-sign are variables, and can be matched against arbitrary chunks in declarative memory.

dependency2+3
  isa dependency
  goal example-goal1
  modified example-solution1
  constraints fact2+3

example-goal1
  isa addition-problem
  arg1 two
  arg2 three
  answer nil

example-solution1
  isa addition-problem
  arg1 two
  arg2 three
  answer five

fact2+3
  isa addition-fact
  addend1 two
  addend2 three
  sum five

(p addition-problem-production1
   =example-goal-variable>
     isa addition-problem
     arg1 =two-variable
     arg2 =three-variable
     answer nil
   =fact2+3-variable>
     isa addition-fact
     addend1 =two-variable
     addend2 =three-variable
     sum =five-variable
==>
   =example-goal-variable>
     answer =five-variable)

Figure 1. Example of ACT-R's analogy process. The chunks show the contents of declarative memory; the production rule below them is the analogised rule.

How can ACT-R's learning mechanisms, which are clearly implicit in nature, give rise to explicit, intentional learning? The next section explores this question.

Explicit learning in ACT-R

Explicit learning in declarative memory

The implicit mechanisms that calculate the activation of a chunk in declarative memory provide a good estimate of the chance that the chunk will be needed. But sometimes it is not enough to rely on the environment to cue the right facts the right number of times. As observed in the introduction, rehearsal is a strategy that can help us memorise facts. Since base-level activation is calculated from the number of times a certain fact is retrieved, rehearsal can emulate extra uses of that fact. In other words, rehearsal tricks the base-level learning mechanism into increasing the activation of a certain chunk.

Craik and Lockhart (1972) have observed that the effectiveness of rehearsal depends on the level of processing. Simple maintenance rehearsal turns out to be much less effective than elaborate rehearsal, in which subjects try to associate the new fact with existing facts. This distinction is easily explained in ACT-R, since elaborate rehearsal not only increases the base-level activation, but also strengthens associations with other chunks. In a previous study I have shown how rehearsal can be modelled in ACT-R (Taatgen, 1996). Rehearsal is implemented by a set of production rules that create subgoals to do the rehearsal. The basic rehearsal productions can be supplemented by task-specific rules that do elaborate rehearsal. The model was able to reproduce the data from classical free-recall experiments, and provides explanations for the primacy and recency effects. Rehearsal is clearly intentional in this model: it only occurs if ACT-R decides to push a rehearsal goal.
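As an illustration of what such a strategy could look like, here is a hypothetical pair of productions in the notation of Figure 1. They are not taken from the Taatgen (1996) model; the chunk types (memorise, rehearse, fact) and slot names are invented for this sketch:

(p decide-to-rehearse
   ; the current goal is to memorise an item; create a rehearsal
   ; subgoal for that item and push it onto the goal stack
   =goal>
     isa memorise
     item =item
==>
   =rehearse-goal>
     isa rehearse
     item =item
   !push! =rehearse-goal)

(p do-rehearse
   ; the rehearsal goal retrieves the item from declarative memory;
   ; the retrieval counts as an extra use of the chunk, so base-level
   ; learning raises its activation, after which the subgoal is popped
   =goal>
     isa rehearse
     item =item
   =fact>
     isa fact
     content =item
==>
   !pop!)

The point of the sketch is that the productions themselves do not change any memory parameters: they merely cause an extra retrieval, and the implicit base-level learning mechanism does the rest.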
Explicit learning in procedural memory

The analogy mechanism creates rules from dependency chunks in declarative memory. The architecture, however, does not specify how dependencies are created. This is not necessary, since dependencies are just chunks in declarative memory. Dependencies may be picked up from the environment, or may be supplied by a parent or teacher. On the other hand, chunks may also be created and manipulated by production rules. So an explicit theory of learning procedural rules involves dependency-manipulating rules in production memory. These rules create dependencies that are compiled into new rules by the analogy mechanism (figure 2). Again, strategies in procedural memory exploit the implicit mechanisms: instead of slowly gathering and generalising regularities from the environment, intentionally created rules are learned.

Figure 2. The process of explicit learning for procedural memory: explicit learning strategies in procedural memory create dependencies in declarative memory, and the analogy mechanism compiles these into explicitly learned rules in procedural memory.

The process described here is closely related to the concept of mental models (Johnson-Laird, 1983). A mental model is a mental construct that helps us predict properties and events in the outside world. A dependency created by explicit learning strategies is a kind of mental model.

How to decide whether to do explicit learning?

Since explicit learning is intentional, a decision has to be made at some point to start a learning attempt. We will concentrate the discussion on explicit strategies for procedural knowledge, but similar points can be made about the decision to do rehearsal. The clearest case in which an explicit strategy is indicated is when we get an outcome that differs from our expectations: an expectation failure, in terms of Schank (1986). Often, however, the indications are not as clear. At some point we have to decide that our current approach is not working, and that something else is needed. Decisions in ACT-R are based on a cost-benefit analysis of the available options.

So suppose we present a new task to a subject, and supply the subject with some information about the task. The subject has several options. Should he just attempt to accomplish the task using the information he has? Should he ask for advice, or go to the library to find more information? Or should he first reflect on the task, to come up with some new strategies? Suppose we restrict our analysis to two possible strategies: search and reflection. If the subject chooses search, he attempts to accomplish the task given his current knowledge about it. If he chooses reflection, on the other hand, he tries explicit learning strategies to come up with new rules.

In Taatgen (1997) I describe a model that calculates the expected outcome of either strategy over time. I will briefly summarise the results here. The expected outcome has three components, as mentioned before: P, the chance of reaching the goal using the strategy; G, the estimated value of the goal; and C, the estimated cost of the strategy. Since G is not tied to a particular strategy, we will concentrate on P and C. For search, the cost is relatively constant, and typically low. The chance of success of search, however, decreases over time if the goal is not reached, reflecting the fact that repeated failure is a strong indication that the current knowledge is false or insufficient. An assumption about reflection is that it needs existing knowledge to work with.
You cannot get something out of nothing. This knowledge can have several sources, as we will see later on; for now, we will assume that the only source of knowledge is implicit knowledge gained through search. So if we have little implicit knowledge to work with, the cost of reflection is high, since it is difficult to come up with something useful. As our explicit knowledge increases, the cost of coming up with something new increases as well. A final assumption of the model is that search increases implicit knowledge, reflection increases explicit knowledge, and explicit knowledge is more powerful. This enables us to describe the increase of knowledge over time, coupled to the conflict resolution mechanism that selects the strategy with the highest expected outcome. Figure 3 shows an example of the results of the model. Figure 3a shows the growth of knowledge about a specific problem, and figure 3b shows the expected outcome of search and reflection. The discontinuities in both graphs indicate a change of strategy.

Figure 3a. Amount of knowledge as a function of time (sec), showing the knowledge gained by implicit learning and by explicit learning, with the moment of "insight" marked.

A nice aspect of this model is that it can give a rational account of the explore-impasse-insight-execute stages often identified in problem solving that requires insight. In the explore stage, the subject still thinks his existing knowledge is the best way to reach the goal; in the impasse stage he decides that the existing knowledge is insufficient; and in the insight stage reflection starts in an attempt to gain new explicit knowledge. Finally, in the execute stage the search process continues using the newly gained knowledge.

An ACT-R model of a simple explicit strategy

The model discussed above is just a mathematical model, and only represents the amounts of several types of knowledge together with their cost-benefit characteristics. To see whether the approach discussed here really works, we have to build a detailed ACT-R model in which actual learning occurs. The main parts of interest in such a model are the dependency-creating rules, since these rules form the explicit learning part of the model. These rules have to be quite general, since they must be applicable to a wide range of problems. So the general rules are in principle context-independent. To be able to work, though, they operate on context-dependent knowledge in declarative memory. Possible sources of this knowledge are:

  • Task instructions and examples
  • Relevant facts and biases in declarative memory
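As an illustration of what a dependency-creating rule could look like, the following hypothetical production is written in the notation of Figure 1. It is not taken from the models described in this chapter: the reflect and fact chunk types and their slot names are invented for this sketch, and a complete model would need further rules to fill the constraints and subgoal slots.

(p propose-dependency
   ; hypothetical rule: the goal is to reflect on a problem for which
   ; a solution and an explanatory fact have been found; store them
   ; together in a new dependency chunk, so that the implicit analogy
   ; mechanism can later compile the example into a production rule
   =goal>
     isa reflect
     problem =problem
     solution =solution
   =explanation>
     isa fact
     relates =problem
     to =solution
==>
   =dependency>
     isa dependency
     goal =problem
     modified =solution
     constraints =explanation)

Once such a dependency chunk is in declarative memory, the explicit part of the work is done: the next time a similar goal turns up, the analogy mechanism generalises the example into a new rule, exactly as in Figure 1.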
